Expansions for approximate maximum likelihood estimators of the fractional difference parameter
Authors
Abstract
Similar articles
Second Order Expansions for the Distribution of the Maximum Likelihood Estimator of the Fractional Difference Parameter By Offer Lieberman
The maximum likelihood estimator (MLE) of the fractional difference parameter in the Gaussian ARFIMA(0, d, 0) model is well known to be asymptotically N(0, 6/π²). This paper develops a second order asymptotic expansion to the distribution of this statistic. The correction term for the density is shown to be independent of d, so that the MLE is second order pivotal for d. This feature of the MLE ...
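For context, the variance constant comes from the Fisher information for d in the Gaussian ARFIMA(0, d, 0) model; the standard first-order result is stated below (the second order correction developed in the paper is not reproduced here):

\mathcal{I}(d) = \sum_{k=1}^{\infty} \frac{1}{k^{2}} = \frac{\pi^{2}}{6},
\qquad
\sqrt{n}\,\bigl(\hat{d}_{\mathrm{ML}} - d\bigr) \;\xrightarrow{\;d\;}\; N\!\bigl(0,\, 6/\pi^{2}\bigr).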
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global min...
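As a point of reference for the nonconvexity mentioned above, the sketch below shows the usual approximate ML estimator for the simplest case, a single complex tone in white Gaussian noise: the frequency estimate maximizes the periodogram over a dense grid and the amplitude follows by least squares. This is a generic illustration, not the semidefinite-programming relaxation studied in the paper, and the helper name approx_ml_single_tone is hypothetical.

import numpy as np

def approx_ml_single_tone(x, grid_size=4096):
    # Approximate ML estimate for x[n] = a * exp(1j*w*n) + noise.
    # For white Gaussian noise, the ML frequency maximizes the periodogram;
    # the amplitude is then the least-squares fit at that frequency.
    n = len(x)
    freqs = np.linspace(-np.pi, np.pi, grid_size, endpoint=False)
    E = np.exp(-1j * np.outer(freqs, np.arange(n)))   # grid of candidate tones
    P = np.abs(E @ x) ** 2 / n                        # periodogram on the grid
    w_hat = freqs[np.argmax(P)]
    a_hat = np.exp(-1j * w_hat * np.arange(n)) @ x / n  # least-squares amplitude
    return w_hat, a_hat

The grid search sidesteps the nonconvex optimization only approximately; the paper's contribution is to replace this step with a convex (SDP) formulation.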
On the Maximum Likelihood Estimators for some Generalized Pareto-like Frequency Distribution
Abstract. In this paper we consider a four-parameter, so-called Generalized Pareto-like Frequency Distribution, which has been constructed using a stochastic Birth-Death Process in order to model phenomena arising in Bioinformatics (Astola and Danielian, 2007). As examples, two "real data" sets on the number of proteins and number of residues for analyzing such a distribution are given. The co...
APPLE: Approximate Path for Penalized Likelihood Estimators
In high-dimensional data analysis, penalized likelihood estimators are shown to provide superior results in both variable selection and parameter estimation. A new algorithm, APPLE, is proposed for calculating the Approximate Path for Penalized Likelihood Estimators. Both convex penalties (such as LASSO) and folded concave penalties (such as MCP) are considered. APPLE efficiently computes the s...
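To make "penalized likelihood estimator" concrete, here is a minimal coordinate-descent solver for the LASSO (the convex case mentioned above), minimizing (1/2n)·||y − Xb||² + λ·||b||₁ via soft-thresholding. It is a generic textbook routine for a single λ, not the APPLE path algorithm itself; a solution path is obtained by re-running it over a decreasing grid of λ values with warm starts.

import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    # Cyclic coordinate descent with soft-thresholding updates.
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]          # partial residual excluding feature j
            rho = X[:, j] @ r_j / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]  # soft-threshold
    return b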
The Convergence of Lossy Maximum Likelihood Estimators
Given a sequence of observations (X_n)_{n≥1} and a family of probability distributions {Q_θ}_{θ∈Θ}, the lossy likelihood of a particular distribution Q_θ given the data X_1^n := (X_1, X_2, ..., X_n) is defined as Q_θ(B(X_1^n, D)), where B(X_1^n, D) is the distortion-ball of radius D around the source sequence X_1^n. Here we investigate the convergence of maximizers of the lossy likelihood.
Journal
Journal title: The Econometrics Journal
Year: 2005
ISSN: 1368-4221,1368-423X
DOI: 10.1111/j.1368-423x.2005.00169.x